Inequalities for the Bayes Risk
Authors
Abstract
Several inequalities are presented which, in part, generalize inequalities by Weinstein and Weiss, giving rise to new lower bounds for the Bayes risk under squared error loss. Consider a Bayesian setting in which y denotes the observed data and θ denotes an unknown scalar parameter. Suppose that E[(θ − E(θ|y))² | y] and E[(θ − E(θ|y))²] exist. For any estimator δ(y) of θ,

E[(θ − δ(y))² | y] ≥ E[(θ − E(θ|y))² | y]   (1)

E[(θ − δ(y))²] ≥ E[(θ − E(θ|y))²]   (2)

The right-hand sides of (1) and (2) are the ultimate lower bounds, as they are attainable. However, these bounds are often difficult to analyze or even to compute. Consequently, numerous lower bounds have been proposed that are presumably simpler to analyze and compute. A comprehensive survey of various approaches can be found in [1]. Starting with the approach in [2], let ψ(y, θ) be any appropriately measurable function of y and θ. Invoking the Cauchy-Schwarz inequality,

E[(θ − E(θ|y))²] ≥ (E[(θ − E(θ|y)) ψ(y, θ)])² / var[ψ(y, θ)]   (3)

provided that var[ψ(y, θ)] exists and is strictly positive. Combining (3) with (2),

E[(θ − δ(y))²] ≥ (E[(θ − E(θ|y)) ψ(y, θ)])² / var[ψ(y, θ)]   (4)

Similarly,

E[(θ − E(θ|y))² | y] ≥ (E[(θ − E(θ|y)) ψ(y, θ) | y])² / var[ψ(y, θ) | y]   (5)

provided that var[ψ(y, θ) | y] exists and is strictly positive. Hence, taking expectations over y,

E[(θ − E(θ|y))²] ≥ E{ (E[(θ − E(θ|y)) ψ(y, θ) | y])² / var[ψ(y, θ) | y] }   (6)

Combining (5) with (1),

E[(θ − δ(y))² | y] ≥ (E[(θ − E(θ|y)) ψ(y, θ) | y])² / var[ψ(y, θ) | y]   (7)

Combining (6) with (2),

E[(θ − δ(y))²] ≥ E{ (E[(θ − E(θ|y)) ψ(y, θ) | y])² / var[ψ(y, θ) | y] }
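As a quick numerical sanity check of inequality (4), the sketch below runs a Monte Carlo experiment in a conjugate Gaussian model, where the posterior mean E(θ|y) is available in closed form. The model (θ ~ N(0, τ²), y|θ ~ N(θ, σ²)), the choice ψ(y, θ) = θ, and the estimator δ(y) = y are illustrative assumptions, not taken from the paper.

```python
# Monte Carlo check of the lower bound (4) under illustrative assumptions:
# theta ~ N(0, tau^2), y | theta ~ N(theta, sigma^2), psi(y, theta) = theta, delta(y) = y.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
tau, sigma = 1.0, 0.5

theta = rng.normal(0.0, tau, size=n)             # draws from the prior
y = theta + rng.normal(0.0, sigma, size=n)       # observations given theta

post_mean = (tau**2 / (tau**2 + sigma**2)) * y   # E(theta | y), closed form in this model
delta = y                                        # a deliberately sub-optimal estimator

psi = theta                                      # simple choice of psi(y, theta)
numer = np.mean((theta - post_mean) * psi) ** 2  # squared numerator of (4)
denom = np.var(psi)                              # var[psi(y, theta)]
bound_4 = numer / denom                          # right-hand side of (4)

mse_delta = np.mean((theta - delta) ** 2)        # E[(theta - delta(y))^2]
bayes_risk = np.mean((theta - post_mean) ** 2)   # E[(theta - E(theta|y))^2]

print(f"MSE of delta(y) = y        : {mse_delta:.4f}")   # ~ sigma^2 = 0.25
print(f"Bayes risk (posterior mean): {bayes_risk:.4f}")  # ~ tau^2*sigma^2/(tau^2+sigma^2) = 0.20
print(f"Lower bound (4)            : {bound_4:.4f}")     # ~ 0.04; mse_delta >= bayes_risk >= bound_4
```

How tight the bound is depends on the choice of ψ: with ψ(y, θ) proportional to θ − E(θ|y), the Cauchy-Schwarz step holds with equality and (3) is attained, whereas the crude choice ψ = θ used above leaves a visible gap.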
Similar sources
Bayes, E-Bayes and Robust Bayes Premium Estimation and Prediction under the Squared Log Error Loss Function
In risk analysis based on the Bayesian framework, premium calculation requires specification of a prior distribution for the risk parameter in the heterogeneous portfolio. When prior knowledge is vague, E-Bayesian and robust Bayesian analysis can be used to handle the uncertainty in specifying the prior distribution by considering a class of priors instead of a single prior. In th...
Inequalities for Bayes Factors and Relative Belief Ratios
We discuss the definition of a Bayes factor, the Savage-Dickey result, and develop some inequalities relevant to Bayesian inferences. We consider the implications of these inequalities for the Bayes factor approach to hypothesis assessment. An approach to hypothesis assessment based on the computation of a Bayes factor, a measure of reliability of the Bayes factor, and the point where the Bayes...
Classic and Bayes Shrinkage Estimation in Rayleigh Distribution Using a Point Guess Based on Censored Data
In classical methods of statistics, the parameter of interest is estimated based on a random sample using natural estimators such as maximum likelihood or unbiased estimators (sample information). In practice, the researcher has prior information about the parameter in the form of a point guess value. Information in the guess value is called nonsample information. Thomp...
Empirical Bayes Estimators for High-Dimensional Sparse Vectors
The problem of estimating a high-dimensional sparse vector θ ∈ Rⁿ from an observation in i.i.d. Gaussian noise is considered. The performance is measured using squared-error loss. An empirical Bayes shrinkage estimator, derived using a Bernoulli-Gaussian prior, is analyzed and compared with the well-known soft-thresholding estimator. We obtain concentration inequalities for the Stein's unbiased ...
Chebyshev Inequalities with Law Invariant Deviation Measures
The consistency of law invariant general deviation measures, introduced by Rockafellar et al., with concave ordering has been used to generalize the Rao-Blackwell theorem and to develop an approach for reducing minimization of law invariant deviation measures to minimization of the measures on subsets of undominated random variables with respect to concave ordering. This approach has been applied f...
Neyman-Pearson Classification under High-Dimensional Settings
Most existing binary classification methods target the optimization of the overall classification risk and may fail to serve some real-world applications such as cancer diagnosis, where users are more concerned with the risk of misclassifying one specific class than the other. The Neyman-Pearson (NP) paradigm was introduced in this context as a novel statistical framework for handling asymmetric...
Journal: CoRR
Volume: abs/1401.5187
Publication date: 2014